```
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get sc
Unable to connect to the server: x509: certificate has expired or is not yet valid: current time 2022-12-15T00:20:43+08:00 is after 2022-12-12T16:00:42Z
```
The actual validity period of the certificate can be checked with the following command:
```
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep Not
            Not Before: Dec 12 16:00:42 2021 GMT
            Not After : Dec 12 16:00:42 2022 GMT
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
```
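The single `openssl` check above can be extended to every certificate kubeadm manages. A minimal sketch, assuming the default layout under `/etc/kubernetes/pki` (the loop simply skips paths that do not exist):

```shell
# Print the expiry (Not After) date of each certificate under the
# default kubeadm pki directory, including the etcd sub-directory.
for crt in /etc/kubernetes/pki/*.crt /etc/kubernetes/pki/etcd/*.crt; do
  [ -f "$crt" ] || continue   # skip unmatched globs / missing files
  printf '%-60s %s\n' "$crt" \
    "$(openssl x509 -in "$crt" -noout -enddate | cut -d= -f2)"
done
```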
The expiration of all kubeadm-managed certificates can also be checked at once with `kubeadm certs check-expiration`:

```
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Dec 12, 2022 16:00 UTC   <invalid>                               no
apiserver                  Dec 12, 2022 16:00 UTC   <invalid>       ca                      no
apiserver-etcd-client      Dec 12, 2022 16:00 UTC   <invalid>       etcd-ca                 no
apiserver-kubelet-client   Dec 12, 2022 16:00 UTC   <invalid>       ca                      no
controller-manager.conf    Dec 12, 2022 16:00 UTC   <invalid>                               no
etcd-healthcheck-client    Dec 12, 2022 16:00 UTC   <invalid>       etcd-ca                 no
etcd-peer                  Dec 12, 2022 16:00 UTC   <invalid>       etcd-ca                 no
etcd-server                Dec 12, 2022 16:00 UTC   <invalid>       etcd-ca                 no
front-proxy-client         Dec 12, 2022 16:00 UTC   <invalid>       front-proxy-ca          no
scheduler.conf             Dec 12, 2022 16:00 UTC   <invalid>                               no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Dec 10, 2031 16:00 UTC   8y              no
etcd-ca                 Dec 10, 2031 16:00 UTC   8y              no
front-proxy-ca          Dec 10, 2031 16:00 UTC   8y              no
```
To renew all of them in one step, run `kubeadm certs renew all`:

```
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubeadm certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[renew] Error reading configuration from the Cluster. Falling back to default configuration

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
```
Checking again after renewal, each certificate is now valid for another year:

```
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Dec 14, 2023 17:11 UTC   364d                                    no
apiserver                  Dec 14, 2023 17:11 UTC   364d            ca                      no
apiserver-etcd-client      Dec 14, 2023 17:11 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Dec 14, 2023 17:11 UTC   364d            ca                      no
controller-manager.conf    Dec 14, 2023 17:11 UTC   364d                                    no
etcd-healthcheck-client    Dec 14, 2023 17:11 UTC   364d            etcd-ca                 no
etcd-peer                  Dec 14, 2023 17:11 UTC   364d            etcd-ca                 no
etcd-server                Dec 14, 2023 17:11 UTC   364d            etcd-ca                 no
front-proxy-client         Dec 14, 2023 17:11 UTC   364d            front-proxy-ca          no
scheduler.conf             Dec 14, 2023 17:11 UTC   364d                                    no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Dec 10, 2031 16:00 UTC   8y              no
etcd-ca                 Dec 10, 2031 16:00 UTC   8y              no
front-proxy-ca          Dec 10, 2031 16:00 UTC   8y              no
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
```
After this command completes, you need to restart the static Pods on the master node. This is required because dynamic certificate reload is not yet supported by all components and certificates. Static Pods are managed by the local kubelet rather than by the API Server, so kubectl cannot be used to delete or restart them.
To restart a static Pod, you can temporarily move its manifest file out of /etc/kubernetes/manifests/ and wait 20 seconds (see the fileCheckFrequency value in the KubeletConfiguration struct). The kubelet terminates a Pod whose manifest is no longer in the manifest directory. After another fileCheckFrequency period you can move the file back; the kubelet will recreate the Pod, and the component picks up the renewed certificate.
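The move-out/move-back procedure described above can be sketched as follows. This is a sketch, not the author's exact steps: the `KUBE_DIR` variable and its guard are added so the snippet is safe to run anywhere, and the 20-second sleep assumes the default fileCheckFrequency.

```shell
# Restart all control-plane static Pods by moving their manifests away,
# waiting one kubelet file-check period, then moving them back.
KUBE_DIR="${KUBE_DIR:-/etc/kubernetes}"   # overridable; defaults to the kubeadm path
if [ -d "$KUBE_DIR/manifests" ]; then
  mv "$KUBE_DIR/manifests" "$KUBE_DIR/manifests.bak"
  sleep 20   # default fileCheckFrequency: kubelet terminates the static Pods
  mv "$KUBE_DIR/manifests.bak" "$KUBE_DIR/manifests"
  # after another check period the kubelet recreates the Pods,
  # which then serve the renewed certificates
fi
```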
Here the manifests had been packed into static.tar and removed. While they are gone the API server refuses connections; extracting the archive restores them:

```
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$kubectl get ns
The connection to the server 192.168.26.81:6443 was refused - did you specify the right host or port?
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$tar -xf static.tar
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$ls
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml  static.tar
```
Trying again after the control plane comes back up, kubectl now reports that authentication is required:
```
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$kubectl get ns
error: You must be logged in to the server (Unauthorized)
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$
```
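The Unauthorized error occurs because the old kubeconfig still embeds the now-stale client certificate. You can confirm this by decoding the certificate straight out of the kubeconfig; a sketch, assuming a standard kubeconfig with an inline `client-certificate-data` field:

```shell
# Decode the client certificate embedded in a kubeconfig and print
# its expiry date; an expired date here explains the Unauthorized error.
grep 'client-certificate-data' /root/.kube/config \
  | awk '{print $2}' \
  | base64 -d \
  | openssl x509 -noout -enddate
```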
```
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes]
└─$ls
admin.conf  controller-manager.conf  kubelet.conf  manifests  pki  scheduler.conf
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes]
└─$cp admin.conf /root/.kube/config
cp: overwrite '/root/.kube/config'? y
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes]
└─$kubectl get ns
NAME                      STATUS   AGE
awx                       Active   60d
constraints-cpu-example   Active   36d
default                   Active   367d
ingress-nginx             Active   356d
..............
```
OK, after copying the renewed admin.conf, the test succeeds and namespace information can be listed normally. Next, confirm the status of the static Pods on the master node.